2 research outputs found

    Market-based Coordination in Dynamic Environments Based on Hoplites Framework

    This work focuses on multi-robot coordination based on the Hoplites framework for solving the multi-robot task allocation (MRTA) problem. Three variants of the MRTA problem, of increasing complexity, are studied: spatial task allocation based on distance, spatial task allocation based on time and distance, and persistent coverage. The Fast Marching Method (FMM) is used for robot path planning and for estimating the cost of the plans that robots bid on in the market. Applying the framework to persistent coverage yields interesting insights, as it takes a high-level approach that differs from commonly used solutions, such as directly computing robot trajectories that maintain the desired coverage level. A high-fidelity simulator, Webots, together with the Robot Operating System (ROS), is used so that the simulations approach the complexity of real-world tests. Results confirm that this pipeline is an effective evaluation tool, as the simulation results closely match those obtained in reality. By modifying the replanning step to avoid costly or invalid plans through priority planning and turn taking, and by basing coordination on maximum plan length rather than on time, we have improved the Hoplites framework and adapted it to our applications. The proposed approach solves the spatial task allocation and persistent coverage problems in general, although some limitations remain: in particular, for persistent coverage the method is suited to applications, such as patrolling, where moderate spatial resolutions are sufficient.
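
    As a rough illustration of the market mechanism described above, the sketch below runs a single sealed-bid auction round in which each robot bids its estimated plan cost for a task and the lowest bid wins. The robot and task coordinates are invented, and straight-line distance stands in for the FMM travel-time estimate; this is a simplified stand-in, not the Hoplites protocol itself.

```python
# Minimal sketch of one sealed-bid auction round in the spirit of a market-based
# allocator; robot/task coordinates are made up, and straight-line distance stands
# in for the FMM plan-cost estimate. Not the Hoplites protocol itself.
import math

def plan_cost(robot_pos, task_pos):
    # Stand-in for an FMM-based travel-time / plan-cost estimate.
    return math.dist(robot_pos, task_pos)

def auction_round(robots, tasks):
    """robots: {name: (x, y)}, tasks: {task_id: (x, y)} -> {task_id: winning robot}."""
    assignment = {}
    available = set(robots)
    for task_id, task_pos in tasks.items():
        if not available:
            break
        # Every free robot submits a sealed bid equal to its estimated plan cost;
        # the cheapest bid wins and that robot leaves the market for this round.
        bids = {name: plan_cost(robots[name], task_pos) for name in available}
        winner = min(bids, key=bids.get)
        assignment[task_id] = winner
        available.remove(winner)
    return assignment

if __name__ == "__main__":
    robots = {"r1": (0.0, 0.0), "r2": (5.0, 5.0)}
    tasks = {"t1": (1.0, 0.0), "t2": (4.0, 6.0)}
    print(auction_round(robots, tasks))  # -> {'t1': 'r1', 't2': 'r2'}
```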

    Real-time camera pose estimation for sports fields

    Given an image sequence featuring a portion of a sports field filmed by a moving and uncalibrated camera, such as that of a smartphone, our goal is to automatically compute, in real time, the focal length and extrinsic camera parameters for each image in the sequence without using a priori knowledge of the camera's position and orientation. To this end, we propose a novel framework that combines accurate localization and robust identification of specific keypoints in the image using a fully convolutional deep architecture. Our algorithm exploits both the field lines and the players’ image locations, assuming their ground-plane positions are given, to achieve accuracy and robustness beyond the current state of the art. We demonstrate its effectiveness on challenging soccer, basketball, and volleyball benchmark datasets.
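
    As a rough illustration of the underlying geometry, rather than of the paper's learned pipeline, the sketch below recovers an approximate focal length and extrinsics from correspondences between detected keypoints in the image and their known ground-plane positions, using a single plane-to-image homography and the standard orthogonality constraint on the rotation columns. The function name, its inputs, and the assumptions of square pixels, zero skew, and a principal point at the image centre are illustrative choices, not taken from the paper.

```python
# Minimal sketch, not the paper's method: recover focal length and extrinsics
# from correspondences between ground-plane points (metres) and image pixels.
# Assumes square pixels, zero skew, and a principal point at the image centre.
import cv2
import numpy as np

def pose_from_ground_plane(pts_world_xy, pts_image, image_size):
    """pts_world_xy: (N, 2) ground-plane coords, pts_image: (N, 2) pixels, N >= 4."""
    w, h = image_size
    cx, cy = w / 2.0, h / 2.0

    # Centre the pixel coordinates so the principal point sits at the origin.
    img_c = np.asarray(pts_image, dtype=np.float64) - np.array([cx, cy])
    H, _ = cv2.findHomography(np.asarray(pts_world_xy, dtype=np.float64), img_c,
                              method=cv2.RANSAC, ransacReprojThreshold=3.0)

    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    # H ~ K [r1 r2 t]; r1 . r2 = 0 gives a closed form for f^2 (Zhang-style
    # constraint; degenerate when the view is fronto-parallel).
    f = np.sqrt(max(-(h1[0] * h2[0] + h1[1] * h2[1]) / (h1[2] * h2[2]), 1e-6))

    K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])
    Kc_inv = np.diag([1.0 / f, 1.0 / f, 1.0])      # inverse of the centred intrinsics

    lam = 1.0 / np.linalg.norm(Kc_inv @ h1)
    r1, r2, t = lam * (Kc_inv @ h1), lam * (Kc_inv @ h2), lam * (Kc_inv @ h3)
    if t[2] < 0:                                   # keep the camera in front of the field
        r1, r2, t = -r1, -r2, -t
    R_approx = np.column_stack([r1, r2, np.cross(r1, r2)])

    # Project onto SO(3) so R is a proper rotation despite noisy keypoints.
    U, _, Vt = np.linalg.svd(R_approx)
    R = U @ Vt
    return K, R, t
```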